46 research outputs found

    Context-sensitive autoassociative memories as expert systems in medical diagnosis

    BACKGROUND: The complexity of contemporary medical practice has driven the development of decision-support aids based on artificial intelligence and neural networks. Distributed associative memories are neural network models that fit well with the view of cognition emerging from current neuroscience. METHODS: We present the context-dependent autoassociative memory model. The sets of diseases and symptoms are mapped onto a pair of orthogonal vector bases. A matrix memory stores the associations between signs and symptoms and their corresponding diseases. A minimal numerical example shows how to train the memory and how the system works. To provide a quick assessment of the model's validity and potential clinical relevance, we implemented an application with real data. A memory was trained with published data on neonates with suspected late-onset sepsis in a neonatal intensive care unit (NICU). A set of personal clinical observations was used as a test set to evaluate the model's capacity to discriminate between septic and non-septic neonates on the basis of clinical and laboratory findings. RESULTS: We show that matrix memory models with context-modulated associations can perform automatic medical diagnosis. As new information becomes available over time, the system progressively narrows the range of diagnostic possibilities. At each step it provides a probabilistic map of the diagnoses possible at that moment. The system can incorporate clinical experience, building a representative database of historical data that captures geo-demographic differences between patient populations. The trained model succeeds in diagnosing late-onset sepsis within the test set of infants in the NICU: sensitivity 100%; specificity 80%; percentage of true positives 91%; percentage of true negatives 100%; accuracy (true positives plus true negatives over all patients) 93.3%; and Cohen's kappa index 0.84. CONCLUSION: Context-dependent associative memories can operate as medical expert systems. The model is presented in a simple, tutorial fashion to encourage straightforward implementation by medical groups. An application with real data, presented as a preliminary evaluation of the model's validity and potential in medical diagnosis, shows that it is a highly promising basis for accurate diagnostic tools.
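The diagnostic scheme described in the abstract above can be sketched in a few lines of NumPy. This is a hypothetical toy, not the authors' implementation: the diseases, symptoms, and disease-symptom table are invented for illustration, and the context is taken to be the current probabilistic map over diagnoses, so each newly presented finding narrows the map.

```python
import numpy as np

# Toy sketch of a context-dependent matrix memory for diagnosis.
# Diseases and symptoms are mapped onto orthogonal basis vectors; the
# memory stores disease <- kron(symptom, context) associations.
diseases = ["sepsis", "pneumonia", "meningitis"]   # hypothetical labels
symptoms = ["fever", "apnea", "lethargy", "rash"]

D = np.eye(len(diseases))   # orthogonal disease vectors
S = np.eye(len(symptoms))   # orthogonal symptom vectors

# Invented disease -> symptom-index table (not real clinical data).
table = {0: {0, 1, 2}, 1: {0, 1}, 2: {0, 2, 3}}

# Train the memory: each association couples a symptom with the
# disease acting as its own context.
M = np.zeros((len(diseases), len(symptoms) * len(diseases)))
for i, js in table.items():
    for j in js:
        M += np.outer(D[i], np.kron(S[j], D[i]))

def diagnose(observed):
    """Present findings sequentially; the context starts uniform and
    narrows into a probabilistic map over the remaining diagnoses."""
    context = np.ones(len(diseases)) / len(diseases)
    for j in observed:
        out = M @ np.kron(S[j], context)
        if out.sum() > 0:
            context = out / out.sum()
    return context

probs = diagnose([0, 2])   # observe fever, then lethargy
```

After "fever" (shared by all three toy diseases) the map stays uniform; "lethargy" then excludes pneumonia, illustrating the narrowing process the abstract describes.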

    Geometric representations for minimalist grammars

    We reformulate minimalist grammars as partial functions on term algebras for strings and trees. Using filler/role bindings and tensor product representations, we construct homomorphisms for these data structures into geometric vector spaces. We prove that the structure-building functions as well as simple processors for minimalist languages can be realized by piecewise linear operators in representation space. We also propose harmony, i.e. the distance of an intermediate processing step from the final well-formed state in representation space, as a measure of processing complexity. Finally, we illustrate our findings by means of two particular arithmetic and fractal representations. Comment: 43 pages, 4 figures
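The filler/role bindings mentioned above can be illustrated with a minimal tensor product representation. This sketch is an assumption-laden toy, not the paper's construction: a string is embedded as a sum of filler ⊗ role outer products over one-hot symbol (filler) vectors and orthonormal positional role vectors, so unbinding with a role vector exactly recovers the symbol at that position.

```python
import numpy as np

# Hypothetical filler/role tensor product representation of a string.
alphabet = {"a": 0, "b": 1, "c": 2}
fillers = np.eye(len(alphabet))   # one-hot filler (symbol) vectors
roles = np.eye(4)                 # orthonormal positional role vectors

def encode(s):
    """Embed a string into the representation space as sum_i f_i (x) r_i."""
    return sum(np.outer(fillers[alphabet[ch]], roles[i])
               for i, ch in enumerate(s))

def unbind(T, i):
    """Recover the filler bound to role i (exact for orthonormal roles)."""
    return T @ roles[i]

T = encode("cab")
first = unbind(T, 0)   # one-hot vector for "c"
```

Because the roles are orthonormal, unbinding is a linear operation in representation space, consistent with the paper's claim that processing can be realized by (piecewise) linear operators.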

    Graphical and numerical representations of DNA sequences: statistical aspects of similarity


    Context-dependent associations in linear distributed memories


    Resolvin D2 Restrains Th1 Immunity and Prevents Alveolar Bone Loss in Murine Periodontitis

    Periodontitis is an infectious inflammatory disease of the supporting structures of the teeth. Resolvins are part of a large family of specialized pro-resolving lipid mediators that enhance active resolution of inflammation and return of inflammatory lesions to homeostasis. In this paper, we demonstrate that resolvin D2 (RvD2), a product of docosahexaenoic acid (DHA) metabolism, prevents alveolar bone loss in Porphyromonas gingivalis-induced experimental periodontitis. Investigations of the immune mechanism of RvD2 actions reveal that 6 weeks after infection, the gingiva of RvD2-treated mice exhibit decreased CD4+ T-cells as well as lower RANKL expression levels and higher osteoprotegerin expression levels. Systemically, RvD2 prevents chronic secretion of IFN-γ and rapidly restores IFN-α levels, without dampening the P. gingivalis-specific immune response. In the gingiva, immediately after P. gingivalis inoculation, RvD2 regulates the mRNA expression of IFN-γ, IL-1β, TNF-α, and IL-10, hence contributing to maintaining local homeostasis. Moreover, RvD2 treatment reduces local neutrophil numbers, whereas pro-resolving macrophage counts are increased. These findings suggest that RvD2 resolves innate inflammatory responses, inhibiting systemic and gingival Th1-type adaptive responses that are known to mediate alveolar bone loss in this model.

    Dynamic searching in the brain

    Cognitive functions rely on the extensive use of information stored in the brain, and searching for the information relevant to a given problem is a very complex task. Human cognition largely uses biological search engines, and we assume that to study cognitive function we need to understand how these brain search engines work. The approach we favor is to study multi-modular network models able to solve particular problems that involve searching for information. The building blocks of these multimodular networks are the context-dependent memory models we have been using for almost 20 years. These models work by associating an output with the Kronecker product of an input and a context. Input, context, and output are vectors that represent cognitive variables. Our models constitute a natural extension of the traditional linear associator. We show that coding information in vectors processed through association matrices allows direct contact between these memory models and procedures that are now classical in the information retrieval field. One essential feature of context-dependent models is that they are based on the thematic packing of information, whereby each context points to a particular set of related concepts. Thematic packing can be extended to multimodular networks involving input-output contexts, in order to accomplish more complex tasks. Contexts act as passwords that elicit the appropriate memory to deal with a query. We also show toy versions of several ‘neuromimetic’ devices that solve cognitive tasks as diverse as decision making and word sense disambiguation. The functioning of these multimodular networks can be described as dynamical systems at the level of cognitive variables.
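The core mechanism described in the abstract above, associating an output with the Kronecker product of an input and a context, can be sketched directly. The vocabulary, contexts, and senses below are invented for a word-sense-disambiguation toy in the spirit of the abstract, not the authors' actual system; the point is that the same input retrieves different memories depending on the context vector, which acts as a password.

```python
import numpy as np

# Toy context-dependent linear associator: output = M @ kron(input, context).
words = {"bank": 0, "bat": 1}                          # hypothetical vocabulary
contexts = {"finance": 0, "river": 1}
senses = {"money-institution": 0, "riverside": 1}

W = np.eye(len(words))       # orthogonal input (word) vectors
C = np.eye(len(contexts))    # orthogonal context vectors
S_out = np.eye(len(senses))  # orthogonal output (sense) vectors

# Train: associate each kron(word, context) pair with a sense vector.
M = np.zeros((len(senses), len(words) * len(contexts)))
M += np.outer(S_out[0], np.kron(W[0], C[0]))  # bank + finance -> money-institution
M += np.outer(S_out[1], np.kron(W[0], C[1]))  # bank + river   -> riverside

def retrieve(word, context):
    """The context selects which stored association the input elicits."""
    out = M @ np.kron(W[words[word]], C[contexts[context]])
    return max(senses, key=lambda k: out[senses[k]])

retrieve("bank", "finance")   # -> "money-institution"
retrieve("bank", "river")     # -> "riverside"
```

With orthogonal inputs and contexts this is an exact linear associator on the product space; the Kronecker product is what lets one matrix hold thematically packed, context-gated associations side by side.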